A POMDP Approach to Robot Motion Planning under Uncertainty

Authors

  • Yanzhu Du
  • David Hsu
  • Hanna Kurniawati
  • Wee Sun Lee
  • Sylvie C.W. Ong
  • Shao Wei Png
Abstract

Motion planning in uncertain and dynamic environments is critical for the reliable operation of autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled, general framework for such planning tasks and have been successfully applied to several moderately complex robotic tasks, including navigation, manipulation, and target tracking. The challenge now is to scale up POMDP planning algorithms to handle more complex, realistic tasks. This paper outlines ideas aimed at overcoming two major obstacles to the efficiency of POMDP planning: the “curse of dimensionality” and the “curse of history”. Our main objective is to show that, using these ideas along with others, POMDP algorithms can successfully plan motion under uncertainty for robotic tasks with a large number of states or a long time horizon. We have implemented some of our algorithms in a software package, the Approximate POMDP Planning Library (APPL), now available for download at http://motion.comp.nus.edu.sg/projects/pomdp/pomdp.html.
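At the core of the POMDP framework described above is planning over beliefs, i.e. probability distributions over states, updated by Bayes' rule after each action and observation. The sketch below shows a discrete belief update; the two-state transition and observation matrices are hypothetical toy values for illustration, not taken from the paper.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Bayesian belief update for a discrete POMDP.
    b: current belief over states, shape (S,)
    T: transition model, T[a][s, s'] = P(s' | s, a)
    Z: observation model, Z[a][s', o] = P(o | s', a)
    """
    b_pred = T[a].T @ b           # predict: sum_s P(s'|s,a) b(s)
    b_new = Z[a][:, o] * b_pred   # correct: weight by observation likelihood
    return b_new / b_new.sum()    # normalize to a probability distribution

# Hypothetical two-state, one-action, two-observation example
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
Z = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}
b = np.array([0.5, 0.5])          # uniform initial belief
b = belief_update(b, a=0, o=0, T=T, Z=Z)
```

A POMDP policy maps such beliefs to actions; the curses of dimensionality and history arise because the belief space grows with the number of states and the set of reachable beliefs grows with the planning horizon.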


Similar Articles

Monte Carlo Value Iteration for Continuous-State POMDPs

Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot’s state s...
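For continuous state spaces, a belief can no longer be stored as a finite vector; a common sampling-based representation, in the spirit of the state sampling mentioned above, is a set of particles updated by a particle filter. The sketch below is an illustrative 1-D example with assumed Gaussian motion and observation noise, not an implementation of MCVI itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_belief_update(particles, action, obs, motion_std=0.1, obs_std=0.2):
    """Approximate a continuous-state belief by particles: propagate through
    noisy 1-D dynamics, then resample in proportion to observation likelihood."""
    # Predict: apply the action with Gaussian motion noise
    predicted = particles + action + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian observation likelihood around each predicted state
    w = np.exp(-0.5 * ((obs - predicted) / obs_std) ** 2)
    w /= w.sum()
    # Resample particles according to their weights
    idx = rng.choice(len(predicted), size=len(predicted), p=w)
    return predicted[idx]

particles = rng.normal(0.0, 1.0, size=1000)   # initial belief as 1000 samples
particles = particle_belief_update(particles, action=0.5, obs=0.6)
```

After the update, the particle cloud concentrates near states consistent with both the commanded motion and the observation, which is exactly the belief a continuous-state POMDP planner must reason over.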


Point-Based Policy Transformation: Adapting Policy to Changing POMDP Models

Motion planning under uncertainty that can efficiently take into account changes in the environment is critical for robots to operate reliably in our living spaces. Partially Observable Markov Decision Process (POMDP) provides a systematic and general framework for motion planning under uncertainty. Point-based POMDP has advanced POMDP planning tremendously over the past few years, enabling POM...


Leveraging Task Knowledge for Robot Motion Planning Under Uncertainty

Noisy observations coupled with nonlinear dynamics pose one of the biggest challenges in robot motion planning. By decomposing the nonlinear dynamics into a discrete set of local dynamics models, hybrid dynamics provide a natural way to model nonlinear dynamics, especially in systems with sudden “jumps” in the dynamics, due to factors such as contacts. We propose a hierarchical POMDP planner th...


Tractable Planning under Uncertainty: Exploiting Structure

THE problem of planning under uncertainty has received significant attention in the scientific community over the past few years. It is now well-recognized that considering uncertainty during planning and decision-making is imperative to the design of robust computer systems. This is particularly crucial in robotics, where the ability to interact effectively with real-world environments is a p...


Motion Planning under Uncertainty for Robotic Tasks with Long Time Horizons

Motion planning with imperfect state information is a crucial capability for autonomous robots to operate reliably in uncertain and dynamic environments. Partially observable Markov decision processes (POMDPs) provide a principled general framework for planning under uncertainty. Using probabilistic sampling, point-based POMDP solvers have drastically improved the speed of POMDP planning, enabl...




Publication date: 2010